9 research outputs found

    A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost

    We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines the higher- and lower-order measures of the error into a single continuous update based on the error amount. We introduce important members of this family of algorithms, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, that improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic such that they cover other well-known cost functions, as described in the paper. The LMLS algorithm achieves comparable convergence performance with the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). We analyze the transient, steady-state and tracking performance of the introduced algorithms and demonstrate the agreement between the theoretical analyses and simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interference. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples. Comment: Submitted to IEEE Transactions on Signal Processing.
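
    To make the update concrete, below is a minimal sketch of an LMS-style filter driven by a relative logarithmic cost, with the least mean logarithmic square (LMLS) form obtained by taking the squared error as the underlying cost. The weighting alpha*e^2/(1 + alpha*e^2), the parameter names mu and alpha, and the toy system-identification loop are illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np

def lmls_update(w, x, d, mu=0.01, alpha=1.0):
    """One hypothetical LMLS-style step: an LMS update whose effective step
    is scaled by a logarithmic-cost weighting of the instantaneous error.

    w     : current filter weights (length-M vector)
    x     : current regressor (length-M vector)
    d     : desired sample
    mu    : step size (assumed)
    alpha : logarithmic-cost scaling (assumed)
    """
    e = d - w @ x                                   # a priori error
    weight = alpha * e**2 / (1.0 + alpha * e**2)    # small for small errors, ~1 for large errors
    return w + mu * weight * e * x, e

# Toy system-identification run (illustrative only).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(5000):
    x = rng.standard_normal(8)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, _ = lmls_update(w, x, d)
print("weight error:", np.linalg.norm(w - w_true))
```

    Under this assumed weighting, small errors yield an LMF-flavored cubic-in-error update while large errors yield an LMS-flavored update, which matches the abstract's description of combining higher- and lower-order error measures.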

    Stochastic Subgradient Algorithms for Strongly Convex Optimization over Distributed Networks

    We study diffusion and consensus based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is only available at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on the stochastic gradient descent (SGD) updates. Particularly, we use a carefully designed time-dependent weighted averaging of the SGD iterates, which yields a convergence rate of $O\left(\frac{N\sqrt{N}}{T}\right)$ after $T$ gradient updates for each node on a network of $N$ nodes. We then show that after $T$ gradient oracle calls, the average SGD iterate achieves a mean square deviation (MSD) of $O\left(\frac{\sqrt{N}}{T}\right)$. This rate of convergence is optimal as it matches the performance lower bound up to constant terms. Similar to the SGD algorithm, the computational complexity of the proposed algorithm also scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as the communication load of the SGD algorithm. Thus, the proposed algorithm is highly efficient in terms of complexity and communication load. We illustrate the merits of the algorithm with respect to the state-of-the-art methods over benchmark real-life data sets and widely studied network topologies.
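
    The single-node averaging component can be sketched as follows; the diffusion/consensus combination across the $N$ nodes is omitted, and the linearly increasing averaging weights and the mu0/(t+1) step-size schedule are assumptions for illustration rather than the paper's exact scheme.

```python
import numpy as np

def weighted_avg_sgd(grad_oracle, w0, T, mu0=1.0):
    """SGD for a strongly convex objective with a time-dependent weighted
    running average of the iterates.

    grad_oracle(w, t) : returns a stochastic (sub)gradient at w
    w0                : initial point
    T                 : number of gradient oracle calls
    mu0               : base step-size constant (assumed schedule mu0/(t+1))
    """
    w = np.array(w0, dtype=float)
    w_avg = np.array(w0, dtype=float)
    weight_sum = 0.0
    for t in range(T):
        g = grad_oracle(w, t)
        w = w - (mu0 / (t + 1)) * g                  # O(1/t) step size for strong convexity
        weight = t + 1.0                             # linearly increasing weights (assumed)
        weight_sum += weight
        w_avg += (weight / weight_sum) * (w - w_avg) # incremental weighted mean
    return w_avg

# Toy strongly convex example: gradient of 0.5 * ||w - w_star||^2 plus noise.
rng = np.random.default_rng(1)
w_star = np.array([1.0, -2.0, 0.5])
noisy_grad = lambda w, t: (w - w_star) + 0.1 * rng.standard_normal(3)
print(weighted_avg_sgd(noisy_grad, np.zeros(3), T=2000))   # approaches w_star
```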

    Improved convergence performance of adaptive algorithms through logarithmic cost

    We present a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines the higher- and lower-order measures of the error into a single continuous update based on the error amount. We introduce the least mean logarithmic square (LMLS) algorithm that achieves comparable convergence performance with the least mean fourth (LMF) algorithm and overcomes the stability issues of the LMF algorithm. In addition, we introduce the least logarithmic absolute difference (LLAD) algorithm. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). © 2014 IEEE
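
    As a companion to the LMLS sketch above, the following shows one plausible form of an LLAD-style step, where the underlying cost is the absolute error; the weighting alpha*|e|/(1 + alpha*|e|) and the parameter choices are illustrative assumptions.

```python
import numpy as np

def llad_update(w, x, d, mu=0.01, alpha=1.0):
    """One hypothetical LLAD-style step: a sign-algorithm update whose
    effective step is scaled by a logarithmic-cost weighting of |e|.

    For small errors the update behaves like LMS (sign(e) * |e| = e), while
    for large, e.g. impulsive, errors it saturates toward the sign algorithm,
    which limits the influence of outliers. Parameter choices are assumed.
    """
    e = d - w @ x
    weight = alpha * abs(e) / (1.0 + alpha * abs(e))
    return w + mu * weight * np.sign(e) * x, e
```

    Because the effective step saturates for large errors, a single impulsive sample perturbs the weights by at most mu * ||x||, which is the intuition behind the robustness claimed above.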

    Adaptive hierarchical space partitioning for online classification

    We propose an online algorithm for supervised learning with strong performance guarantees under the empirical zero-one loss. The proposed method adaptively partitions the feature space in a hierarchical manner and generates a powerful finite combination of basic models. This yields a strong classifier in the form of a piecewise linear model that performs well on highly nonlinear, complex data. The computational complexity of the introduced algorithm scales linearly with the dimension of the feature space, the depth of the partitioning and the number of processed data points. Through experiments, we show that the introduced algorithm outperforms state-of-the-art ensemble techniques over various well-known machine learning data sets. © 2016 IEEE
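
    The following is a minimal sketch of the general idea of hierarchically partitioning the feature space and combining simple per-region models; the fixed splits at zero, the perceptron node models, and the exponential weighting of node predictions by past zero-one loss are assumptions for illustration and do not reproduce the paper's exact construction.

```python
import numpy as np

class Node:
    def __init__(self, dim):
        self.w = np.zeros(dim)      # per-region linear (perceptron) model
        self.loss = 0.0             # cumulative zero-one loss of this node's model

def partition_path(x, depth):
    """Root-to-leaf path in a fixed binary tree that splits feature
    (level mod dim) at threshold 0 (assumes roughly centered features)."""
    path, key = [()], ()
    for level in range(depth):
        key = key + (int(x[level % len(x)] > 0.0),)
        path.append(key)
    return path

class HierarchicalOnlineClassifier:
    """Sketch: combine the perceptrons of all nodes on the active path,
    exponentially weighting each node by its past zero-one loss."""
    def __init__(self, dim, depth=3, eta=1.0, lr=0.1):
        self.dim, self.depth, self.eta, self.lr = dim, depth, eta, lr
        self.nodes = {}

    def _node(self, key):
        return self.nodes.setdefault(key, Node(self.dim))

    def predict(self, x):
        path = [self._node(k) for k in partition_path(x, self.depth)]
        losses = np.array([n.loss for n in path])
        weights = np.exp(-self.eta * (losses - losses.min()))
        weights /= weights.sum()
        score = sum(w * np.sign(n.w @ x) for w, n in zip(weights, path))
        return 1 if score >= 0 else -1

    def update(self, x, y):                  # y in {-1, +1}
        for key in partition_path(x, self.depth):
            n = self._node(key)
            pred = 1 if n.w @ x >= 0 else -1
            n.loss += float(pred != y)       # track zero-one loss per node
            if pred != y:                    # perceptron-style update
                n.w += self.lr * y * x
```

    Each sample touches only the depth + 1 nodes on its path, so the per-sample cost is linear in the depth and the feature dimension, consistent with the complexity claim in the abstract.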

    Sequential nonlinear regression via context trees [Bağlam ağaçları ile ardışık doğrusal olmayan bağlanım]

    In this paper, we consider the problem of sequential nonlinear regression and introduce an efficient learning algorithm using context trees. Specifically, the regressor space is partitioned and the resulting regions are represented by a context tree. In each region, we assign an independent regression algorithm, and the outputs of all possible nonlinear models defined on the context tree are adaptively combined with a computational complexity linear in the number of nodes. Upper bounds on the performance of the algorithm are also derived without making any statistical assumptions on the data. A numerical example is provided to illustrate the theoretical results. © 2014 IEEE
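
    A compact sketch of the context-tree idea for scalar regression is given below; the binary splits of [-1, 1], the per-node affine LMS regressors, and the exponential weighting by accumulated squared loss are illustrative assumptions. In particular, the paper adaptively combines all partitions defined by the tree, whereas this sketch only mixes the nodes on the active root-to-leaf path.

```python
import numpy as np

def context_path(u, depth, lo=-1.0, hi=1.0):
    """Nodes (as interval tuples) on the root-to-leaf path of a binary
    context tree over the regressor space [lo, hi] that contain u."""
    path = [(lo, hi)]
    for _ in range(depth):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if u < mid else (mid, hi)
        path.append((lo, hi))
    return path

class ContextTreeRegressor:
    """Sketch: per-node affine LMS regressors, combined along the active
    path with weights that decay exponentially in accumulated squared loss."""
    def __init__(self, depth=4, mu=0.1, eta=0.5):
        self.depth, self.mu, self.eta = depth, mu, eta
        self.models = {}          # node -> (a, b) for prediction a*u + b
        self.losses = {}          # node -> accumulated squared loss

    def _pred(self, node, u):
        a, b = self.models.get(node, (0.0, 0.0))
        return a * u + b

    def predict(self, u):
        path = context_path(u, self.depth)
        preds = np.array([self._pred(n, u) for n in path])
        losses = np.array([self.losses.get(n, 0.0) for n in path])
        w = np.exp(-self.eta * (losses - losses.min()))
        return float(preds @ (w / w.sum()))

    def update(self, u, d):
        for node in context_path(u, self.depth):
            e = d - self._pred(node, u)
            a, b = self.models.get(node, (0.0, 0.0))
            self.models[node] = (a + self.mu * e * u, b + self.mu * e)  # LMS step
            self.losses[node] = self.losses.get(node, 0.0) + e * e
```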

    Competitive linear MMSE estimation under structured data uncertainties [Yapısal veri belirsizlikleri altında yarışmacı doğrusal MMSE kestirim]

    In this paper, we consider the linear estimation problem under structured data uncertainties. A robust algorithm is presented for bounded uncertainties under the mean square error (MSE) criterion. The performance of the linear estimator is defined relative to the performance of the linear minimum MSE (MMSE) estimator tuned to the underlying unknown data uncertainties, i.e., the introduced algorithm has a competitive framework. Then, using this relative performance measure, we find the estimator that minimizes this cost for the worst-case system model. We show that finding this estimator can equivalently be cast as a semidefinite programming (SDP) problem. Numerical examples are provided to illustrate the theoretical results. © 2014 IEEE
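
    To illustrate the competitive (minimax-regret) idea, the toy sketch below discretizes the uncertainty set into a few candidate model matrices and finds the linear estimator that minimizes the worst-case regret against the MMSE estimator tuned to each candidate. The finite sampling of the uncertainty set, the white-signal MSE expression, and the cvxpy modeling are assumptions; the sketch does not reproduce the paper's SDP formulation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 4, 6                        # signal and observation dimensions
sx2, sn2 = 1.0, 0.1                # signal and noise variances (white, assumed)

H0 = rng.standard_normal((m, n))   # nominal model y = H x + noise
# Finite sample of the (structured) uncertainty set: small perturbations of H0.
candidates = [H0 + 0.2 * rng.standard_normal((m, n)) for _ in range(5)]

def mse(W, H):
    """MSE of x_hat = W y for y = H x + n with white x and n (convex in W)."""
    return sx2 * cp.sum_squares(np.eye(n) - W @ H) + sn2 * cp.sum_squares(W)

def mmse_value(H):
    """Optimal MSE achievable by the MMSE estimator tuned to H."""
    W = sx2 * H.T @ np.linalg.inv(sx2 * H @ H.T + sn2 * np.eye(m))
    return sx2 * np.sum((np.eye(n) - W @ H) ** 2) + sn2 * np.sum(W ** 2)

# Minimize the worst-case regret over the sampled uncertainty set.
W = cp.Variable((n, m))
regrets = [mse(W, H) - mmse_value(H) for H in candidates]
prob = cp.Problem(cp.Minimize(cp.maximum(*regrets)))
prob.solve()
print("worst-case regret:", prob.value)
```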

    Robust set-membership filtering algorithms against impulsive noise [Dürtün gürültüye karşı sağlam küme üyeliği süzgeç algoritmaları]

    In this paper, we propose robust set-membership filtering algorithms against impulsive noise. First, we introduce the set-membership normalized least absolute difference (SM-NLAD) algorithm. This algorithm provides robustness against impulsive noise by penalizing the absolute error instead of the squared error. Then, in order to achieve comparable convergence performance in impulse-free noise environments, we propose the set-membership normalized least logarithmic absolute difference (SM-NLLAD) algorithm through the logarithmic cost framework. The logarithmic cost function intrinsically behaves like the absolute error for large errors and like the squared error for small errors. Finally, in the numerical examples, we show the robustness of our algorithms against impulsive noise and their comparable performance in impulse-free noise environments. © 2014 IEEE
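
    A minimal sketch of the set-membership idea combined with a sign-error (least-absolute-difference-style) update is given below: the filter updates only when the a priori error magnitude exceeds a bound gamma, and the step is normalized by the regressor energy. The specific step-size rule is an assumption borrowed from SM-NLMS-style reasoning, not the paper's SM-NLAD/SM-NLLAD recursions.

```python
import numpy as np

def sm_sign_update(w, x, d, gamma=0.05, eps=1e-8):
    """Sketch of a set-membership, sign-error update: skip the update if the
    a priori error already meets the error bound gamma; otherwise take a
    normalized step in the direction sign(e) * x.

    With the assumed rule mu = (|e| - gamma) / (x @ x + eps), the a posteriori
    error magnitude equals gamma (up to eps), which gives the data-selective,
    outlier-limiting behavior of set-membership filtering.
    """
    e = d - w @ x
    if abs(e) <= gamma:
        return w, e                      # data-selective: no update needed
    mu = (abs(e) - gamma) / (x @ x + eps)
    return w + mu * np.sign(e) * x, e
```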

    A Comprehensive Approach to Universal Piecewise Nonlinear Regression Based on Trees
